Tuberculosis (TB) is one of the top 10 causes of death worldwide. Diagnosing and treating TB in its early stages is fundamental to reducing the rate of people affected by this disease. To assist specialists in the diagnosis from bright-field smear images, many studies have addressed the automatic detection of Mycobacterium tuberculosis, the causative agent of TB. To contribute to this theme, a method for bacillus detection combining a convolutional neural network (CNN) with a mosaic-image approach was implemented. The proposal was evaluated using a robust image dataset validated by three specialists. Three CNN architectures were evaluated, with three optimization methods for each architecture. The deepest architecture presented the best results, reaching accuracy values above 99%. Other metrics, such as precision, sensitivity, specificity, and F1-score, were also used to assess the performance of the CNN models.

In-vivo optical imaging of the cortical surface provides the ability to record different types of biophysiological signals, e.g., structural information, intrinsic signals such as blood-oxygenation-coupled reflection changes, as well as extrinsic properties of voltage-sensitive probes, such as fluorescent voltage-sensitive dyes. The recorded data sets have very high temporal and spatial resolutions on a meso- to macroscopic scale, surpassing conventional multi-electrode recordings. Intrinsic and functional data sets each provide unique information about the temporal and spatial dynamics of cortical functioning, yet each has individual drawbacks. To optimize the informational value, it would thus be opportune to combine different types of optical imaging in a near-simultaneous recording. Due to the low signal-to-noise ratio of voltage-sensitive dyes, it is necessary to reduce stray-light pollution below the level of the camera's dark noise. It is thus impossible to record full-spectrum optical data sets.
We address this problem with a time-multiplexed illumination, bespoke to the utilized voltage-sensitive dye, to record an alternating series of intrinsic and extrinsic frames with a high-frequency CMOS sensor. These near-simultaneous data series can be used to compare the mutual influence of intrinsic and extrinsic dynamics (with regard to extracorporeal functional imaging), as well as for motion compensation and thus for minimizing frame averaging, which in turn results in increased spatial precision of functional data and in a reduction of the necessary experimental data sets (3R principle).

We present a robust, precise image-binarization technique for automatically detecting filamentous microorganisms in digital fluorescence microscopy scans, with application to finding the pseudohyphae of the fungal pathogens responsible for Candida vaginitis. This method employs a hybrid constant false positive rate processor that integrates cell-averaging and order-statistic detectors with linear windows at multiple orientation angles. The hypothesis-test rule incorporates elongation enhancement and region-of-interest masking. Our approach achieves adaptivity to local noise and to all possible object orientations. The designed processor is evaluated theoretically and experimentally using clinical images, and successful detection results are demonstrated.

Fluorescence lifetime is effective in discriminating cancerous tissue from normal tissue, but conventional discrimination methods are primarily based on statistical approaches combined with prior knowledge. This paper investigates the application of deep convolutional neural networks (CNNs) for automatic differentiation of ex-vivo human lung cancer via fluorescence lifetime imaging. Around 70,000 fluorescence images from ex-vivo lung tissue of 14 patients were collected with a custom fibre-based fluorescence lifetime imaging endomicroscope.
Five state-of-the-art CNN models, namely ResNet, ResNeXt, Inception, Xception, and DenseNet, were trained and tested to derive quantitative results using accuracy, precision, recall, and the area under the receiver operating characteristic curve (AUC) as the metrics. The CNNs were first evaluated on lifetime images. Since fluorescence lifetime is independent of intensity, further experiments were conducted by stacking intensity and lifetime images together as the input to the CNNs. As the original CNNs were implemented for RGB images, two strategies were applied. One was retaining the CNNs as-is, putting the intensity and lifetime images in two of the three channels and leaving the remaining channel blank. The other was adapting the CNNs to a two-channel input. Quantitative results demonstrate that the selected CNNs are considerably superior to conventional machine learning algorithms. Combining intensity and lifetime images brings a noticeable performance gain compared with using lifetime images alone. In addition, the CNNs with intensity-lifetime RGB input are comparable to the modified CNNs with intensity-lifetime two-channel input in accuracy and AUC, but significantly better in precision and recall.

Automatic identification of the subcellular compartments of proteins in fluorescence microscopy images is an important task for quantitatively evaluating cellular processes. A common problem in the development of deep-learning-based classifiers is that only a limited number of labeled images is available for training. To address this challenge, we propose a new approach for subcellular organelle classification that combines an effective and efficient architecture based on a compact convolutional neural network with a deep embedded clustering algorithm. We validate our approach on a benchmark of HeLa cell microscopy images.
The network yields high accuracy, outperforming state-of-the-art methods, while having a significantly smaller number of parameters. More interestingly, experimental results show that our method is strongly robust to limited labeled training data, requiring four times less annotated data than usual while maintaining a high accuracy of 93.9%.

Precise three-dimensional segmentation of choroidal vessels helps us understand the development and progression of multiple ocular diseases, such as age-related macular degeneration and pathological myopia. Here we propose a novel automatic choroidal vessel segmentation framework for swept-source optical coherence tomography (SS-OCT) to visualize and quantify three-dimensional choroidal vessel networks. The retinal pigment epithelium (RPE) was delineated from the volumetric data, and en face frames along the depth were extracted below the RPE. Choroidal vessels in the first en face frame were labeled by adaptive thresholding; each subsequent frame was segmented by propagating the segments from the frame above and was in turn used as the reference for the next frame. The choroid boundary was determined by the structural similarity index between adjacent frames. The framework was tested on 3 × 3 mm SS-OCT volumes acquired with a prototype SS-OCT system (PlexElite 9000, Zeiss Meditec, Dublin, CA, US), and vessel metrics including perfusion density, vessel density, and mean vessel diameter were computed.
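The frame-by-frame propagation scheme in this last abstract can be illustrated with a minimal sketch. This is not the authors' implementation: a global mean threshold stands in for their adaptive thresholding, a simple mask intersection stands in for segment propagation, and all names (`segment_choroid_stack`, `offset`) are invented for illustration.

```python
import numpy as np

def segment_choroid_stack(frames, offset=0.0):
    """Hypothetical sketch of the propagation-based segmentation.

    `frames` is a (depth, H, W) stack of en face slices extracted below
    the RPE. The first frame is binarized by thresholding (vessels appear
    dark in en face OCT, so below-threshold pixels are vessel candidates);
    each subsequent frame keeps only candidates that overlap the mask
    propagated from the frame above, which then serves as the reference
    for the next frame.
    """
    masks = []
    prev = None
    for frame in frames:
        thresh = frame.mean() - offset   # global stand-in for adaptive thresholding
        candidate = frame < thresh       # dark pixels = vessel candidates
        if prev is None:
            mask = candidate             # first frame: threshold result only
        else:
            mask = candidate & prev      # propagate from the frame above
        masks.append(mask)
        prev = mask
    return np.stack(masks)
```

From the resulting binary stack, a crude per-frame vessel density (fraction of vessel pixels, in the spirit of the reported perfusion/vessel density metrics) is simply `masks.mean(axis=(1, 2))`.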